29 research outputs found

    Imitation learning of whole-body grasps

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 152-154).

    Humans often learn to manipulate objects by observing other people. In much the same way, robots can use imitation learning to pick up useful skills. This thesis demonstrates a system that uses imitation learning to teach a robot to grasp objects with both hand grasps and whole-body grasps, which recruit the arms and torso as well as the hands. Demonstration grasp trajectories are created by teleoperating a simulated robot to pick up simulated objects, and are stored as sequences of keyframes at which contacts with the object are gained or lost. When presented with a new object, the system compares it against the objects in a stored database to select a demonstrated grasp used on a similar object. Both objects are modeled as combinations of primitives (boxes, cylinders, and spheres), and the primitives for each object are grouped into 'functional groups' that geometrically match parts of the new object with similar parts of the demonstration object. These functional groups are then used to map contact points from the demonstration object to the new object, and the resulting adapted keyframes are adjusted and checked for feasibility. Finally, a trajectory is found that moves among the keyframes in the adapted grasp sequence, and the full trajectory is tested for feasibility by executing it in the simulation. The system successfully uses this method to pick up 92 out of 100 randomly generated test objects in simulation.

    by Kaijen Hsiao. S.M.
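The primitive-matching and contact-transfer steps described above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual algorithm: `Primitive`, `match_primitives`, and `map_contact` are hypothetical names, the matcher pairs primitives greedily by kind only (the real system reasons about functional groups geometrically), and contacts are transferred by per-axis scaling between matched primitives.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    kind: str      # "box", "cylinder", or "sphere"
    size: tuple    # characteristic dimensions along x, y, z
    center: tuple  # position of the primitive in the object frame

def match_primitives(demo, new):
    """Greedily pair each demonstration primitive with an unused
    new-object primitive of the same kind (a stand-in for the
    geometric functional-group matching in the thesis)."""
    pairs, used = [], set()
    for d in demo:
        for i, n in enumerate(new):
            if i not in used and n.kind == d.kind:
                pairs.append((d, n))
                used.add(i)
                break
    return pairs

def map_contact(point, demo_prim, new_prim):
    """Transfer a demo contact point onto the matched new primitive
    by scaling its offset from the primitive center per axis."""
    return tuple(
        (p - dc) * (ns / ds) + nc
        for p, dc, ds, ns, nc in zip(point, demo_prim.center,
                                     demo_prim.size, new_prim.size,
                                     new_prim.center)
    )
```

For example, a contact on the face of a 2x2x2 demonstration box maps to the corresponding face of a 4x4x4 new box at twice the offset from center.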

    Towards reliable grasping and manipulation in household environments

    Abstract: We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors. We build upon, and add several new contributions to, the significant prior work in these areas. A salient feature of our work is the tight coupling between perception (both visual and tactile) and manipulation, aiming to address the uncertainty due to sensor and execution errors. This integration effort has revealed new challenges, some of which can be addressed through system and software engineering, and some of which present opportunities for future research. Our approach is aimed at typical indoor environments, and is validated by long-running experiments in which the PR2 robotic platform was able to consistently grasp a large variety of known and unknown objects. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS).
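The coupling of perception, planning, execution, and tactile failure recovery described in the abstract amounts to a closed loop over the pipeline stages. The sketch below shows only that loop structure, under stated assumptions: `detect`, `plan_grasp`, `execute`, and `tactile_ok` are hypothetical stand-ins for the real ROS components, and the recovery policy here is simply "re-perceive and retry."

```python
def grasp_with_recovery(detect, plan_grasp, execute, tactile_ok,
                        max_attempts=3):
    """Perception -> grasp planning -> execution loop with tactile
    failure detection and retry. Returns True on a verified grasp."""
    for _ in range(max_attempts):
        obj = detect()            # scene interpretation (e.g. 3D range data)
        grasp = plan_grasp(obj)   # grasp + motion planning
        execute(grasp)            # run the planned trajectory
        if tactile_ok():          # verify the grasp with tactile sensing
            return True
        # failure detected: loop back and re-perceive before retrying
    return False
```

The design point is that failure identification feeds back into perception rather than blindly re-executing the same plan, which is what makes the loop robust to sensor and execution errors.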

    Robust grasping under object pose uncertainty

    This paper presents a decision-theoretic approach to problems that require accurate placement of a robot relative to an object of known shape, such as grasping for assembly or tool use. The decision process is applied to a robot hand with tactile sensors, to localize the object on a table and ultimately achieve a target placement by selecting among a parameterized set of grasping and information-gathering trajectories. The process is demonstrated in simulation and on a real robot. This work has been previously presented in Hsiao et al. (Workshop on Algorithmic Foundations of Robotics (WAFR), 2008; Robotics Science and Systems (RSS), 2010) and Hsiao (Relatively robust grasping, Ph.D. thesis, Massachusetts Institute of Technology, 2009). National Science Foundation (U.S.) (Grant 0712012)
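The core of the decision-theoretic approach is maintaining a belief over the object's pose and updating it from tactile observations. A minimal sketch of that update, assuming a discretized set of candidate poses and a hypothetical `likelihood(observation, pose)` model (the paper's actual belief representation and observation model are richer):

```python
def update_belief(belief, likelihood, observation):
    """One discrete Bayes-filter step over candidate object poses.

    belief:      dict mapping pose -> prior probability
    likelihood:  callable giving P(observation | pose)
    observation: the tactile reading just received
    """
    posterior = {pose: p * likelihood(observation, pose)
                 for pose, p in belief.items()}
    z = sum(posterior.values())
    if z == 0:
        return belief  # observation rules out everything; keep the prior
    return {pose: p / z for pose, p in posterior.items()}
```

Each contact (or absence of contact) reweights the candidate poses, so information-gathering trajectories are valuable precisely when they are expected to concentrate this distribution.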

    The Little Hopping Kangaroo Robot That Tried


    Relatively Robust Grasping

    This thesis presents an approach for grasping objects robustly under significant positional uncertainty. In the field of robot manipulation there has been a great deal of work on how to grasp objects stably, and in the field of robot motion planning there has been a great deal of work on how to find collision-free paths to those grasp positions. However, most of this work assumes exact knowledge of the shapes and positions of both the object and the robot; little work has been done on how to grasp objects robustly in the presence of position uncertainty. To reason explicitly about uncertainty while grasping, we model the problem as a partially observable Markov decision process (POMDP). We derive a closed-loop strategy that maintains a belief state (a probability distribution over world states) and selects actions with a receding horizon using forward search through the belief space. Our actions are world-relative trajectories (WRTs): fixed trajectories expressed relative to the most likely state of the world. We localize the object, ensure its reachability, and robustly grasp it.
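The receding-horizon selection over WRT actions can be illustrated with a one-step forward search: evaluate each candidate action against the current belief and pick the one with the highest expected value. This is a simplified sketch, not the thesis's planner: `simulate` and `value` are hypothetical stand-ins for the belief-space outcome model and its scoring, and the real search looks deeper than one step.

```python
def select_action(belief, actions, simulate, value):
    """One-step forward search through belief space.

    belief:   dict mapping world state -> probability
    actions:  candidate world-relative trajectories (WRTs)
    simulate: callable predicting the outcome of an action in a state
    value:    callable scoring a predicted outcome
    Returns the action with the highest expected value under the belief.
    """
    best_action, best_value = None, float("-inf")
    for action in actions:
        expected = sum(p * value(simulate(action, state))
                       for state, p in belief.items())
        if expected > best_value:
            best_action, best_value = action, expected
    return best_action
```

Because actions are re-selected after every observation, an information-gathering WRT can win early (when the belief is diffuse) and a grasping WRT can win once the object is localized.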